
    Zipper-based embedding of modern attribute grammar extensions

    This research abstract describes the research plan for a Ph.D. project. We plan to define a powerful and elegant embedding of modern extensions to attribute grammars. Attribute grammars are a suitable formalism to express complex, multiple-traversal algorithms. In recent years there has been a great deal of work on attribute grammars, namely by defining new extensions to the formalism (forwarding, reference attribute grammars, etc.), by proposing new attribute evaluation models (lazy and circular evaluators, etc.) and by embedding attribute grammars (such as first-class attribute grammars). We will study how to design such extensions through a zipper-based embedding, and we will study efficient evaluation models for this embedding. Finally, we will express several attribute grammars in our setting and analyse the performance of our implementation.

    Recovery of isopentane from light gasoline

    Master's dissertation in Chemical Engineering. The objective of this study was to analyse the feasibility of recovering isopentane from the light gasoline produced in units 1200 and 3000 of the Matosinhos refinery. Isopentane has a high research octane number (RON) of 92.3 and may therefore be subsequently blended back into gasoline. Fractional distillation was chosen to perform this separation, and the process was simulated for a distillation column existing in the refinery using the Aspen Plus simulator. Since the recovered isopentane will need to be stored for a period of 10 to 15 days, the design of a suitable storage tank is also included. In the simulation performed for the separation of isopentane from the light gasoline produced in unit 3000, for a throughput of 408 tonnes/day (450 short tons/day), the feed was considered to enter the column at a pressure of 4.5 bar and a temperature of 20 ºC. The pressures in the condenser and reboiler were 4 bar and 4.2 bar, respectively (data supplied by the Matosinhos refinery). The reflux ratio used was 10 and the feed entered the distillation column on plate 64. Under these conditions it was possible to recover 93.5% of the isopentane, yielding 115.9 tonnes/day of overhead product with a RON of 90.3. In the simulation for the separation of isopentane from the light gasoline produced in unit 1200, a throughput of 266 tonnes/day (293 short tons/day) was considered, under the same operating conditions as the previous simulation except for the reflux ratio, which was 15. The results show a recovery of 92.4% of the isopentane and an overhead product flow of 30.1 tonnes/day with a RON of 91.2. The simulation for the separation of isopentane from the mixture of the streams of units 3000 and 1200 used a flow rate of 674 tonnes/day (743 short tons/day) and the same operating conditions as for unit 3000. This resulted in the recovery of 91.2% of the isopentane and an overhead product flow of 142.4 tonnes/day with a RON of 90.5. The tank designed has a volume of 6100 m³ and a cost of € 1 153 112. In a scenario where there are difficulties in the direct sale of the light gasoline produced in units 3000 and 1200, this study shows that the separation of isopentane from light gasoline by distillation represents a value for the refinery of € 27 501 788 in high-octane overhead product. The total energy cost involved is € 2 628 104.

    A web portal for the certification of open source software

    Lecture Notes in Computer Science 7791, 2014. This paper presents a web portal for the certification of open source software. The portal aims at helping programmers in the Internet age, when there are (too) many open source reusable libraries and tools available. Our portal offers programmers a web-based and easy-to-use setting to analyze and certify open source software, which is a crucial step in helping programmers choose among the many available alternatives and obtain some guarantees before using a piece of software. The paper presents our first prototype of such a web portal. It also describes in detail a domain-specific language that allows programmers to describe, at a high level of abstraction, specific open source software certifications. The design and implementation of this language is the core of the web portal.

    Zipper-based modular and deforested computations

    In this paper we present a methodology to implement multiple-traversal algorithms in a functional programming setting. The implementations we obtain consist of highly modular programs, free of intermediate structures, that rely on the concept of functional zippers to navigate over data structures. Even though our methodology is developed and presented in Haskell, a lazy functional language, we do not make essential use of laziness. This is an essential difference with respect to other attribute grammar embeddings. It also means that an approach similar to ours can be followed in a strict functional setting such as OCaml, for example. In the paper, our technique is applied to a significant number of problems that are well known to the functional programming community, demonstrating its practical interest.
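
    To make the zipper idea concrete, here is a minimal Haskell sketch of a Huet-style functional zipper over a plain binary tree, of the kind this line of work builds on. The names (Tree, Ctx, Loc, downLeft, downRight, up) are illustrative only and are not the paper's API.

        module ZipperSketch where

        -- A binary tree with values at the leaves.
        data Tree a = Leaf a | Fork (Tree a) (Tree a)
          deriving Show

        -- The context records the path from the focused subtree back to
        -- the root, keeping the siblings passed on the way down.
        data Ctx a = Top
                   | L (Ctx a) (Tree a)   -- we went left; the right sibling is kept
                   | R (Tree a) (Ctx a)   -- we went right; the left sibling is kept
          deriving Show

        -- A location pairs the focused subtree with its context.
        type Loc a = (Tree a, Ctx a)

        enter :: Tree a -> Loc a
        enter t = (t, Top)

        downLeft, downRight, up :: Loc a -> Maybe (Loc a)
        downLeft  (Fork l r, c) = Just (l, L c r)
        downLeft  _             = Nothing
        downRight (Fork l r, c) = Just (r, R l c)
        downRight _             = Nothing
        up (l, L c r) = Just (Fork l r, c)
        up (r, R l c) = Just (Fork l r, c)
        up (_, Top)   = Nothing

        -- Example: move to the left child, replace it, and rebuild the tree,
        -- touching only the nodes on the path back to the root.
        example :: Tree Int
        example = case downLeft (enter (Fork (Leaf 1) (Leaf 2))) of
                    Just (_, c) -> rebuild (Leaf 42, c)
                    Nothing     -> error "unexpected shape"
          where
            rebuild loc = maybe (fst loc) rebuild (up loc)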

    Zipper-based attribute grammars and their extensions

    Lecture Notes in Computer Science Volume 8129, 2013. Attribute grammars are a suitable formalism to express complex software language analysis and manipulation algorithms, which rely on multiple traversals of the underlying syntax tree. Recently, attribute grammars have been extended with mechanisms such as references and higher-order and circular attributes. Such extensions provide a powerful modular mechanism and allow the specification of complex fix-point computations. This paper defines an elegant and simple, zipper-based embedding of attribute grammars and their extensions as first-class citizens. In this setting, language specifications are defined as a set of independent, off-the-shelf components that can easily be composed into a powerful, executable language processor. Several real examples of language specification and processing programs have been implemented in this setting.
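
    As a rough illustration of the fix-point computations that circular attributes express, the sketch below iterates a step function until two successive approximations agree. It is a generic sketch, not the paper's embedding; fixpoint, reachable and the toy grammar are invented for illustration.

        module CircularSketch where

        import Data.List (nub, sort)

        -- Generic fix-point iteration: recompute until two successive
        -- approximations agree (assumes a monotone step on a finite domain).
        fixpoint :: Eq a => (a -> a) -> a -> a
        fixpoint step x
          | x' == x   = x
          | otherwise = fixpoint step x'
          where x' = step x

        -- Toy circular attribute: the symbols reachable from a start symbol,
        -- given grammar productions as (left-hand side, right-hand side symbols).
        type Sym = String

        reachable :: [(Sym, [Sym])] -> Sym -> [Sym]
        reachable prods start = fixpoint step [start]
          where
            step reach = sort . nub $
              reach ++ [ s | (lhs, rhs) <- prods, lhs `elem` reach, s <- rhs ]

        -- > reachable [("S", ["A"]), ("A", ["B"]), ("C", ["D"])] "S"
        -- ["A","B","S"]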

    Generating attribute grammar-based bidirectional transformations from rewrite rules

    Higher-order attribute grammars provide a convenient means for specifying unidirectional transformations, but they provide no direct support for bidirectional transformations. In this paper we show how rewrite rules (with non-linear right-hand sides) that specify a forward/get transformation can be inverted to specify a partial backward/put transformation. These inverted rewrite rules can then be extended with additional rules, based on characteristics of the source language grammar and the forward transformation, to create, under certain circumstances, a total backward transformation. Finally, these rules are used to generate attribute grammar specifications implementing both transformations. This work is partly funded by the following projects: European Regional Development Fund (ERDF) through the program COMPETE, project reference FCOMP-01-0124-FEDER-020532; by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework (NSRF), through the ERDF, project reference RL3 SENSING NORTE-07-0124-FEDER-000058; by the Portuguese Government through FCT (Foundation for Science and Technology); by the U.S. National Science Foundation (NSF) awards No. 0905581 and 1047961; and by the FLAD/NSF program Portugal-U.S. Research Networks 2011.
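
    The following hand-written Haskell sketch illustrates the idea behind rule inversion on a toy desugaring rule: the forward (get) rule rewrites negation into subtraction from zero, and its inversion yields a partial backward (put) function. The toy language and the names get and put are illustrative; the paper generates such rules from attribute grammar specifications rather than writing them by hand.

        module InvertSketch where

        -- The source language has negation; the target language does not.
        data Src = SLit Int | SNeg Src | SAdd Src Src deriving (Show, Eq)
        data Tgt = TLit Int | TSub Tgt Tgt | TAdd Tgt Tgt deriving (Show, Eq)

        -- Forward transformation: rewrite  Neg e  ==>  0 - e .
        get :: Src -> Tgt
        get (SLit n)   = TLit n
        get (SNeg e)   = TSub (TLit 0) (get e)
        get (SAdd a b) = TAdd (get a) (get b)

        -- Inverted rules give a partial backward transformation: only targets
        -- matching the right-hand sides of the forward rules are accepted.
        put :: Tgt -> Maybe Src
        put (TLit n)          = Just (SLit n)
        put (TSub (TLit 0) e) = SNeg <$> put e
        put (TAdd a b)        = SAdd <$> put a <*> put b
        put _                 = Nothing        -- e.g. a general subtraction

        -- Round trip on the forward image:  put (get s) == Just s .
        -- > put (get (SNeg (SAdd (SLit 1) (SLit 2))))
        -- Just (SNeg (SAdd (SLit 1) (SLit 2)))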

    BiYacc: Roll your parser and reflective printer into one

    In: A. Cunha, E. Kindler (eds.): Proceedings of the Fourth International Workshop on Bidirectional Transformations (Bx 2015), L’Aquila, Italy, July 24, 2015, published at http://ceur-ws.org. Language designers usually need to implement parsers and printers. Despite being two related programs, in practice they are designed and implemented separately. This approach has an obvious disadvantage: as a language evolves, both its parser and printer need to be separately revised and kept synchronised. Such tasks are routine but complicated and error-prone. To facilitate these tasks, we propose a language called BiYacc, whose programs denote both a parser and a printer. In essence, BiYacc is a domain-specific language for writing putback-based bidirectional transformations: the printer is a putback transformation, and the parser is the corresponding get transformation. The pairs of parsers and printers generated by BiYacc are thus always guaranteed to satisfy the usual round-trip properties. The highlight that distinguishes this reflective printer from others is that the printer, being a putback transformation, accepts not only an abstract syntax tree but also a string, and produces an updated string consistent with the given abstract syntax tree. We can thus make use of the additional input string, with mechanisms such as simultaneous pattern matching on the view and the source, to provide users with full control over the printing strategies. JSPS - Japan Society for the Promotion of Science (25240009).
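
    BiYacc itself is a standalone DSL; purely as a reading aid, the Haskell sketch below states the usual round-trip properties that a parser (get) and reflective printer (put) pair is expected to satisfy. Lens, get and put are illustrative names for the two directions, not BiYacc's API.

        module RoundTripSketch where

        -- A putback-based bidirectional transformation between a concrete
        -- source (e.g. program text) and an abstract view (e.g. an AST).
        data Lens s v = Lens
          { get :: s -> v          -- the "parser" direction
          , put :: s -> v -> s     -- the "reflective printer" direction
          }

        -- GetPut: putting back an unmodified view leaves the source unchanged
        -- (printing the AST you just parsed keeps the original text).
        getPut :: Eq s => Lens s v -> s -> Bool
        getPut l s = put l s (get l s) == s

        -- PutGet: parsing the printed result yields the view that was put
        -- (the printed text is consistent with the given AST).
        putGet :: Eq v => Lens s v -> s -> v -> Bool
        putGet l s v = get l (put l s v) == v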

    Memoized zipper-based attribute grammars

    Attribute Grammars are a powerful, well-known formalism to implement and reason about programs which, by design, are conveniently modular. In this work we focus on a state-of-the-art zipper-based embedding of Attribute Grammars and further improve its performance by controlling attribute (re)evaluation using memoization techniques. We present the results of our optimization by comparing its impact on various implementations of different, well-studied Attribute Grammars.
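
    The sketch below shows, in generic Haskell, the kind of memoization the abstract refers to: attribute values already computed for a node are cached in a table keyed by the node's position, so repeated demands do not trigger re-evaluation. It assumes the containers and mtl packages; the types and names are illustrative and are not the embedding described in the paper.

        module MemoSketch where

        import qualified Data.Map.Strict as M
        import Control.Monad.State (State, gets, modify, evalState)

        data Tree = Leaf Int | Fork Tree Tree

        -- A node is identified by its path from the root (False = left, True = right).
        type Path  = [Bool]
        type Cache = M.Map Path Int

        -- The attribute "sum of the leaves below this node", memoized per path.
        sumAttr :: Tree -> Path -> State Cache Int
        sumAttr t p = do
          cached <- gets (M.lookup p)
          case cached of
            Just v  -> pure v                   -- reuse the earlier evaluation
            Nothing -> do
              v <- case t of
                     Leaf n   -> pure n
                     Fork l r -> (+) <$> sumAttr l (p ++ [False])
                                     <*> sumAttr r (p ++ [True])
              modify (M.insert p v)             -- remember it for later demands
              pure v

        -- Demanding the attribute twice evaluates the tree only once.
        demo :: Int
        demo = evalState (sumAttr t [] >> sumAttr t []) M.empty
          where t = Fork (Leaf 1) (Fork (Leaf 2) (Leaf 3))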

    A purely functional combinator language for software quality assessment

    Quality assessment of open source software is becoming an important and active research area. One of the reasons for this recent interest is the popularity of the Internet: nowadays, programming also involves searching the large set of open source libraries and tools that may be reused when developing our software applications. In order to reuse such open source software artifacts, programmers not only need the guarantee that the reused artifact is certified, but also that independently developed artifacts can be easily combined into a coherent piece of software. In this paper we describe a domain-specific language that allows programmers to describe, at an abstract level, how software artifacts can be combined into powerful software certification processes. This domain-specific language is the building block of a web-based, open-source software certification portal. This paper introduces the embedding of such a domain-specific language as a combinator library written in the Haskell programming language. The semantics of this language is expressed via attribute grammars that are embedded in Haskell, which provide a modular and incremental setting to define the combination of software artifacts.
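
    To give a feel for the combinator style of composing analyses, here is a hypothetical Haskell sketch. The names (Analysis, both, andThen, certify) and the toy analyses are invented for illustration and are not the portal's actual combinator library.

        module CertSketch where

        -- An analysis consumes an artifact and produces a report (or fails).
        newtype Analysis a r = Analysis { runAnalysis :: a -> Either String r }

        -- Run two analyses on the same artifact and pair their reports.
        both :: Analysis a r1 -> Analysis a r2 -> Analysis a (r1, r2)
        both (Analysis f) (Analysis g) = Analysis (\a -> (,) <$> f a <*> g a)

        -- Feed the report of one analysis into a follow-up analysis.
        andThen :: Analysis a r -> Analysis r s -> Analysis a s
        andThen (Analysis f) (Analysis g) = Analysis (\a -> f a >>= g)

        -- Example certification process built from smaller, reusable pieces.
        wordCount :: Analysis String Int
        wordCount = Analysis (Right . length . words)

        smallEnough :: Int -> Analysis Int Bool
        smallEnough limit = Analysis (\n -> Right (n <= limit))

        certify :: Analysis String (Int, Bool)
        certify = both wordCount (wordCount `andThen` smallEnough 1000)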

    Smelling faults in spreadsheets

    Despite being staggeringly error-prone, spreadsheets are a highly flexible programming environment that is widely used in industry. In fact, spreadsheets are widely adopted for decision making, and decisions taken upon wrong (spreadsheet-based) assumptions may have serious economic impacts on businesses, among other consequences. This paper proposes a technique to automatically pinpoint potential faults in spreadsheets. It combines a catalog of spreadsheet smells, which provide a first indication of a potential fault, with a generic spectrum-based fault localization strategy in order to improve (in terms of accuracy and false positive rate) on these initial results. Our technique has been implemented in a tool which helps users detect faults. To validate the proposed technique, we consider a well-known and well-documented catalog of faulty spreadsheets. Our experiments yield two main results: we were able to distinguish smells that can point to faulty cells from those that cannot; and we provide a technique capable of detecting a significant number of errors: two thirds of the cells labeled as faulty are in fact (documented) errors.
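
    For readers unfamiliar with spectrum-based fault localization, the Haskell sketch below ranks components (here, spreadsheet cells) by the Ochiai coefficient, one of the similarity coefficients commonly used in this setting; whether this particular tool uses Ochiai is an assumption, and the names and example spectra are illustrative.

        module SfliSketch where

        import Data.List (sortBy)
        import Data.Ord (comparing, Down (..))

        -- For each component (e.g. a spreadsheet cell) we count how often it
        -- was involved in failing and passing observations.
        data Spectrum = Spectrum
          { failedWith    :: Int   -- failing observations touching the component
          , failedWithout :: Int   -- failing observations not touching it
          , passedWith    :: Int   -- passing observations touching the component
          }

        -- Ochiai similarity: ef / sqrt ((ef + nf) * (ef + ep)).
        ochiai :: Spectrum -> Double
        ochiai (Spectrum ef nf ep)
          | denom == 0 = 0
          | otherwise  = fromIntegral ef / sqrt (fromIntegral denom)
          where denom = (ef + nf) * (ef + ep)

        -- Rank components from most to least suspicious.
        rank :: [(String, Spectrum)] -> [(String, Double)]
        rank = sortBy (comparing (Down . snd)) . map (fmap ochiai)

        -- > rank [("B2", Spectrum 3 0 1), ("C7", Spectrum 1 2 4)]
        -- [("B2",0.866...),("C7",0.258...)]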